
    Soft interaction model and the LHC data

    Most models for soft interactions that were proposed prior to the measurements at the LHC are only marginally compatible with the LHC data; our GLM model shares this deficiency. In this paper we investigate possible causes of the problem by considering separate fits to the high energy ($W > 500$ GeV) and low energy ($W < 500$ GeV) data. Our new results are moderately higher than our previous predictions. Our results for the total and elastic cross sections are systematically lower than the recently published TOTEM and ALICE values, while our results for the inelastic cross section and the forward slope agree with the data. If, with additional experimental data, the errors are reduced while the central cross section values remain unchanged, we will need to reconsider the physics on which our model is built. Comment: 12 pp, 12 figures in .eps file
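    For orientation, the observables compared above are tied together by the standard relations below (textbook definitions, not formulas quoted from this paper); the forward slope $B$ parametrizes the small-$|t|$ behaviour of the elastic cross section:

    \sigma_{tot}(s) = \sigma_{el}(s) + \sigma_{inel}(s), \qquad \frac{d\sigma_{el}}{dt} \propto e^{-B(s)\,|t|} \quad (|t| \to 0).

    By the first relation, reproducing the inelastic cross section while undershooting both the total and the elastic ones by the same amount is arithmetically consistent, which is why the pattern above hinges on how the experimental errors evolve.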

    Specifying and Verifying Concurrent Algorithms with Histories and Subjectivity

    We present a lightweight approach to Hoare-style specifications for fine-grained concurrency, based on a notion of time-stamped histories that abstractly capture atomic changes in the program state. Our key observation is that histories form a partial commutative monoid, a structure fundamental for representation of concurrent resources. This insight provides us with a unifying mechanism that allows us to treat histories just like heaps in separation logic. For example, both are subject to the same assertion logic and inference rules (e.g., the frame rule). Moreover, the notion of ownership transfer, which usually applies to heaps, has an equivalent in histories. It can be used to formally represent helping---an important design pattern for concurrent algorithms whereby one thread can execute code on behalf of another. Specifications in terms of histories naturally abstract granularity, in the sense that sophisticated fine-grained algorithms can be given the same specifications as their simplified coarse-grained counterparts, making them equally convenient for client-side reasoning. We illustrate our approach on a number of examples and validate all of them in Coq.Comment: 17 page
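    To make the "histories form a partial commutative monoid" observation concrete, here is a minimal Python sketch (purely illustrative, not code from the paper; the representation and names are hypothetical): a history is a finite map from timestamps to atomic state changes, and the join is defined only on histories with disjoint timestamp domains, exactly like disjoint union of heaps in separation logic.

# Illustrative sketch (not from the paper).
# A time-stamped history: finite map from timestamp -> (state_before, state_after).
# Histories form a partial commutative monoid (PCM): the join is defined only
# when the timestamp domains are disjoint, mirroring disjoint heap union.

def join(h1, h2):
    """Partial, commutative, associative join of two histories."""
    if set(h1) & set(h2):
        return None  # undefined: overlapping timestamps
    merged = dict(h1)
    merged.update(h2)
    return merged

UNIT = {}  # the empty history is the unit of the PCM

# Example: two threads each contribute the atomic changes they performed.
h_self = {1: ("empty", "pushed a")}       # this thread's contribution
h_other = {2: ("pushed a", "popped a")}   # the environment's contribution

assert join(h_self, UNIT) == h_self                    # unit law
assert join(h_self, h_other) == join(h_other, h_self)  # commutativity
assert join(h_self, {1: ("x", "y")}) is None           # overlap: join undefined

    Because the same algebraic laws hold as for heaps, frame-rule-style reasoning carries over to histories unchanged.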

    Exclusive double-diffractive production of open charm in proton-proton and proton-antiproton collisions

    We calculate differential cross sections for exclusive double diffractive (EDD) production of open charm in proton-proton and proton-antiproton collisions. Sizeable cross sections are found. The EDD contribution constitutes about 1% of the total inclusive cross section for open charm production. A few differential distributions are shown and discussed. The EDD contribution falls more steeply with both the transverse momentum of the $c$ quark/antiquark and the $c\bar{c}$ invariant mass than in the inclusive case. Comment: 11 pages, 7 figures

    Survival probability for exclusive central diffractive production of colorless states at the LHC

    In this paper we discuss the survival probability for exclusive central diffractive production of a colorless small size system at the LHC. This process has a clear signature of two large rapidity gaps. Using the eikonal approach for the description of soft interactions, we predict the value of the survival probability to be about 5-6% for single channel models, while for a two channel model the survival probability is about 3%. The dependence of the survival probability factor (damping factor) on the transverse momenta of the recoiling protons is discussed, and we suggest that it be measured at the Tevatron so as to minimize the possible ambiguity in the calculation of the survival probability at the LHC. Comment: 33 pages, 26 figures
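    For reference, in a single-channel eikonal model the gap survival probability is usually written as the opacity-suppressed average over impact parameter (the standard generic form, not a formula quoted from this paper):

    \langle |S|^2 \rangle = \frac{\int d^2 b \, |A_H(s,b)|^2 \, e^{-\Omega(s,b)}}{\int d^2 b \, |A_H(s,b)|^2},

    where $A_H(s,b)$ is the hard central-exclusive amplitude in impact-parameter space and $\Omega(s,b)$ is the opacity of the soft interaction; multi-channel models replace the single opacity by a matrix over diffractive eigenstates.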

    Unitarity Corrections to the Proton Structure Functions through the Dipole Picture

    We study the dipole picture for the description of deep inelastic scattering, focusing on the structure functions that are driven directly by the gluon distribution. We perform estimates using the effective dipole cross section given by the Glauber-Mueller approach in QCD, which encodes the unitarity corrections associated with the saturation phenomenon. We also address the frame invariance of the calculations when analysing these observables. Comment: 16 pages, 8 figures. Version to be published in Phys. Rev.
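    Schematically (normalization conventions vary between papers, so this is only meant to fix ideas and is not quoted from this work), a Glauber-Mueller-type dipole cross section has the eikonalized form

    \sigma_{dip}(x, r) = 2 \int d^2 b \left[ 1 - e^{-\Omega(x,r,b)/2} \right], \qquad \Omega(x,r,b) \propto \alpha_s \, r^2 \, x g(x,\mu^2) \, S(b),

    which saturates the black-disc limit $2\int d^2 b$ for large dipoles or small $x$, and reduces to the usual single-scattering (leading-twist) result when $\Omega \ll 1$.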

    Linearizability with Ownership Transfer

    Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it assumes complete isolation between a library and its client, with interactions limited to passing values of a given data type. This is inappropriate for common programming languages, where libraries and their clients can communicate via the heap, transferring the ownership of data structures, and can even run in a shared address space without any memory protection. In this paper, we present the first definition of linearizability that lifts this limitation and establish an Abstraction Theorem: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation, related to the original one by our generalisation of linearizability. This allows us to abstract from the details of the library implementation while reasoning about the client. We also prove that linearizability with ownership transfer can be derived from the classical notion if the library does not access some of the data structures transferred to it by the client.
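    The ownership-transfer discipline the abstract refers to can be illustrated with a small Python sketch (hypothetical and illustrative only; the class and names are not from the paper): the client hands a heap-allocated buffer to a concurrent library and, under the intended specification, must not touch it again until the library hands it back.

# Illustrative sketch (not from the paper) of ownership transfer between
# a client and a concurrent library communicating via the shared heap.
import threading

class ConcurrentStack:
    """Minimal lock-based stack; clients transfer ownership of what they push."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def push(self, buf):
        # Ownership of `buf` passes from the client to the library here:
        # after push returns, the client must no longer mutate `buf`.
        with self._lock:
            self._items.append(buf)

    def pop(self):
        # Ownership of the returned buffer passes back to the caller.
        with self._lock:
            return self._items.pop() if self._items else None

buf = bytearray(b"payload")   # client-owned data structure on the shared heap
stack = ConcurrentStack()
stack.push(buf)               # ownership transferred to the library
# buf[0] = 0                  # would violate the ownership discipline assumed above
got = stack.pop()             # ownership transferred back to the client
assert got is buf             # the same heap object; the client owns it again

    Classical linearizability only constrains the values exchanged; the generalisation described in the abstract is what makes specifications of this heap-transferring kind sound for client-side reasoning.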

    Novel Mechanism of Nucleon Stopping in Heavy Ion Collisions

    When a diquark does not fragment directly but breaks up in such a way that only one of its quarks enters the produced baryon, the latter is produced closer to mid-rapidity. The relative size of this diquark-breaking component increases quite fast with increasing energy. We show that at a given energy it also increases with the atomic mass number and with the centrality of the collision, and that it allows us to explain the rapidity distribution of the net baryon number ($p-\bar{p}$) in SS central collisions. Predictions for Pb-Pb collisions are presented. Comment: 10 pages, LaTeX file and 6 PostScript figures uuencoded in one file

    The survival probability of large rapidity gaps in a three channel model

    The values and energy dependence of the survival probability $\langle |S|^2 \rangle$ of large rapidity gaps (LRG) are calculated in a three channel model. This model includes single and double diffractive production, as well as elastic rescattering. It is shown that $\langle |S|^2 \rangle$ decreases with increasing energy, in line with recent results for LRG dijet production at the Tevatron. This is in spite of the weak energy dependence of the ratio $(\sigma_{el}+\sigma_{SD})/\sigma_{tot}$. Comment: 26 pages in LaTeX file, 11 figures in eps file

    Saturation Effects in Deep Inelastic Scattering at low $Q^2$ and its Implications on Diffraction

    We present a model based on the concept of saturation for small $Q^2$ and small $x$. With only three parameters we achieve a good description of all deep inelastic scattering data below $x = 0.01$. This includes a consistent treatment of charm and a successful extrapolation into the photoproduction regime. The same model leads to a roughly constant ratio of the diffractive and inclusive cross sections. Comment: 24 pages, 12 figures, LaTeX file
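    Saturation fits of this three-parameter type are commonly built on a dipole cross section of the generic form below (shown only to indicate the structure such models take, not quoted from this paper):

    \hat{\sigma}(x, r) = \sigma_0 \left[ 1 - \exp\!\left( -\frac{r^2}{4 R_0^2(x)} \right) \right], \qquad R_0^2(x) = \frac{1}{\mathrm{GeV}^2} \left( \frac{x}{x_0} \right)^{\lambda},

    with parameters $\sigma_0$, $x_0$ and $\lambda$: at small dipole size $r$ the cross section is colour transparent ($\propto r^2$), while at large $r$ or small $x$ it saturates at $\sigma_0$, which is what drives the roughly constant diffractive-to-inclusive ratio.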

    Tighter Relations Between Sensitivity and Other Complexity Measures

    The sensitivity conjecture is a longstanding and fundamental open problem in the area of complexity measures of Boolean functions and decision tree complexity. The conjecture postulates that the maximum sensitivity of a Boolean function is polynomially related to the other major complexity measures. Despite much attention to the problem and major advances in the analysis of Boolean functions in the past decade, the problem remains wide open, with no positive result toward the conjecture since the work of Kenyon and Kutin from 2004. In this work, we present new upper bounds for various complexity measures in terms of sensitivity, improving the bounds provided by Kenyon and Kutin. Specifically, we show that $\deg(f)^{1-o(1)} = O(2^{s(f)})$ and $C(f) < 2^{s(f)-1} s(f)$; these in turn imply various corollaries regarding the relation between sensitivity and other complexity measures, such as block sensitivity, via known results. The gap between sensitivity and other complexity measures remains exponential, but these results are the first improvement on this difficult problem achieved in a decade. Comment: This is the merged form of arXiv submission 1306.4466 with another work. Appeared in ICALP 2014, 14 pages
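    To make the measures concrete, here is a small brute-force Python sketch (purely illustrative, not code from the paper) that computes the sensitivity s(f) and block sensitivity bs(f) of a Boolean function given as an oracle on n-bit inputs; everything is exponential in n, so it is only usable for very small n.

# Illustrative sketch (not from the paper).
# Inputs and variable blocks are encoded as n-bit integers (bit i = variable i).

def sensitivity(f, n):
    """s(f): max over inputs x of the number of single-bit flips that change f(x)."""
    best = 0
    for x in range(1 << n):
        fx = f(x)
        flips = sum(1 for i in range(n) if f(x ^ (1 << i)) != fx)
        best = max(best, flips)
    return best

def block_sensitivity(f, n):
    """bs(f): max over x of the largest family of disjoint blocks whose flip changes f(x)."""
    best = 0
    for x in range(1 << n):
        fx = f(x)
        # every non-empty block of variables whose simultaneous flip changes f(x)
        sensitive_blocks = [b for b in range(1, 1 << n) if f(x ^ b) != fx]

        def largest_disjoint(start, used):
            # size of the largest family of pairwise disjoint sensitive blocks
            size = 0
            for j in range(start, len(sensitive_blocks)):
                b = sensitive_blocks[j]
                if b & used == 0:
                    size = max(size, 1 + largest_disjoint(j + 1, used | b))
            return size

        best = max(best, largest_disjoint(0, 0))
    return best

def OR3(x):
    return int(x != 0)

# OR on 3 bits: both measures equal 3, witnessed by the all-zero input.
print(sensitivity(OR3, 3), block_sensitivity(OR3, 3))  # -> 3 3

    Block sensitivity always upper-bounds sensitivity, and bounds of the kind stated in the abstract control how far apart the two (and related measures such as certificate complexity C(f) and degree) can drift.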